The growing interest in intelligent services and privacy protection for mobile devices has given rise to the widespread application of federated learning in Multi-access Edge Computing (MEC). Diverse user behaviors call for personalized services with heterogeneous Machine Learning (ML) models on different devices. Federated Multi-task Learning (FMTL) has been proposed to train related but personalized ML models for different devices, but previous works suffer from excessive communication overhead during training and neglect the model heterogeneity among devices in MEC. Introducing knowledge distillation into FMTL can simultaneously enable efficient communication and model heterogeneity among clients, yet existing methods rely on a public dataset, which is impractical in reality. To tackle this dilemma, Federated MultI-task Distillation for Multi-access Edge CompuTing (FedICT) is proposed. FedICT keeps local and global knowledge apart during the bi-directional distillation processes between clients and the server, aiming to support multi-task clients while alleviating the client drift that arises from the divergent optimization directions of client-side local models. Specifically, FedICT includes Federated Prior Knowledge Distillation (FPKD) and Local Knowledge Adjustment (LKA). FPKD reinforces the clients' fitting of local data by introducing prior knowledge of the local data distributions, while LKA corrects the distillation loss of the server so that the transferred local knowledge better matches the generalized representation. Experiments on three datasets show that FedICT significantly outperforms all compared benchmarks under various data heterogeneity and model architecture settings, achieving improved accuracy with less than 1.2% of the training communication overhead of FedAvg and no more than 75% of the training communication rounds of FedGKT.
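As a rough illustration of the FPKD idea, the sketch below re-weights a client's distillation term by its local label frequencies; the abstract does not give the exact formulation, so the weighting scheme and all names here are assumptions.

```python
import torch.nn.functional as F

def fpkd_loss(client_logits, global_logits, labels, local_label_freq, T=3.0):
    # Hypothetical sketch of FPKD: the client distills from the server's
    # logits, with the soft term re-weighted by the client's own label
    # frequencies (the "prior knowledge" of the local distribution) so the
    # local model keeps fitting local data. The real FedICT loss may differ.
    ce = F.cross_entropy(client_logits, labels)
    soft = F.kl_div(
        F.log_softmax(client_logits / T, dim=1),
        F.softmax(global_logits / T, dim=1),
        reduction="none",
    ).sum(dim=1) * T * T
    prior = local_label_freq[labels]        # per-sample class prior
    return ce + (prior * soft).mean()
```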
Referring video object segmentation (RVOS) aims to segment, in the frames of a given video, the object to which a referring expression refers. Previous methods adopt multi-stage approaches and design complex pipelines to obtain promising results. Recently, end-to-end methods based on Transformers have proved their superiority. In this work, we draw on the advantages of both lines of work to provide a simple and effective pipeline for RVOS. First, we improve the state-of-the-art one-stage method ReferFormer to obtain mask sequences that are strongly correlated with the language descriptions. Second, based on a reliable and high-quality keyframe, we leverage the superior performance of a video object segmentation model to further enhance the quality and temporal consistency of the mask results. Our single model reaches 70.3 J&F on the Referring Youtube-VOS validation set and 63.0 on the test set. After ensembling, we achieve 64.1 on the final leaderboard, ranking 1st in the CVPR 2022 Referring Youtube-VOS challenge. Code will be available at https://github.com/Zhiweihhh/cvpr2022-rvos-challenge.git.
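The two-stage pipeline can be summarized as the hedged sketch below; the function names are illustrative placeholders, not the authors' actual API.

```python
# Hypothetical pipeline sketch; names are illustrative placeholders.
def rvos_pipeline(frames, expression, referformer, vos_model):
    # Stage 1: language-grounded masks and confidences for every frame.
    masks, scores = referformer(frames, expression)

    # Stage 2: take the most confident frame as the keyframe, then let a
    # semi-supervised VOS model re-propagate its mask through the video
    # for better quality and temporal consistency.
    key = max(range(len(frames)), key=lambda i: scores[i])
    return vos_model.propagate(frames, key_index=key, key_mask=masks[key])
```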
Learning generalizable policies that can adapt to unseen environments remains challenging in visual Reinforcement Learning (RL). Existing approaches try to acquire a robust representation by diversifying the appearances of in-domain observations for better generalization. Limited by the specific observations of the environment, these methods ignore the possibility of exploring diverse real-world image datasets. In this paper, we investigate how a visual RL agent would benefit from off-the-shelf visual representations. Surprisingly, we find that the early layers of an ImageNet pre-trained ResNet model can provide rather generalizable representations for visual RL. Hence, we propose the Pre-trained Image Encoder for Generalizable visual reinforcement learning (PIE-G), a simple yet effective framework that can generalize to unseen visual scenarios in a zero-shot manner. Extensive experiments are conducted on the DMControl Generalization Benchmark, DMControl Manipulation Tasks, Drawer World, and CARLA to verify the effectiveness of PIE-G. Empirical evidence suggests that PIE-G improves sample efficiency and significantly outperforms previous state-of-the-art methods in terms of generalization performance. In particular, PIE-G boasts a 55% generalization performance gain on average in the challenging video background setting. Project Page: https://sites.google.com/view/pie-g/home.
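The core recipe is easy to reproduce in outline. Below is a minimal sketch assuming a ResNet-18 backbone truncated after its second residual stage; the paper's exact cut point and preprocessing may differ.

```python
import torch
import torchvision

# Freeze the early layers of an ImageNet pre-trained ResNet-18 and use them
# as the observation encoder; cutting after layer2 is an assumption.
resnet = torchvision.models.resnet18(weights="IMAGENET1K_V1")
encoder = torch.nn.Sequential(
    resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool,
    resnet.layer1, resnet.layer2,
)
for p in encoder.parameters():
    p.requires_grad = False            # pre-trained features stay fixed

obs = torch.rand(1, 3, 84, 84)         # e.g. one DMControl frame
features = encoder(obs).flatten(1)     # input to the policy/value heads
```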
This technical report briefly describes our JDExplore d-team's Vega v2 submission on the SuperGLUE leaderboard. SuperGLUE is more challenging than the widely used general language understanding evaluation (GLUE) benchmark, containing eight difficult language understanding tasks, including question answering, natural language inference, word sense disambiguation, coreference resolution, and reasoning. [Method] Instead of arbitrarily increasing the size of a pretrained language model (PLM), our aim is to 1) fully extract knowledge from the input pretraining data given a certain parameter budget, e.g., 6B, and 2) effectively transfer this knowledge to downstream tasks. To achieve goal 1), we propose self-evolution learning for PLMs to wisely predict the informative tokens that should be masked, and supervise the masked language modeling (MLM) process with rectified smooth labels. For goal 2), we leverage the prompt transfer technique to improve the low-resource tasks by transferring the knowledge from the foundation model and related downstream tasks to the target task. [Results] According to our submission record (Oct. 2022), with our optimized pretraining and fine-tuning strategies, our 6B Vega method achieved new state-of-the-art performance on 4/8 tasks, sitting atop the SuperGLUE leaderboard on Oct. 8, 2022, with an average score of 91.3.
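One plausible reading of the "rectified smooth labels" component is sketched below, mixing the one-hot MLM target with the model's own output distribution instead of a uniform prior; this is an illustrative guess, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def smoothed_mlm_loss(logits, targets, eps=0.1):
    # One plausible form of "rectified smooth labels": mix the one-hot MLM
    # target with the model's own predictive distribution rather than a
    # uniform prior. Illustrative guess, not the paper's exact recipe.
    with torch.no_grad():
        soft = F.softmax(logits, dim=-1)
    one_hot = F.one_hot(targets, logits.size(-1)).float()
    labels = (1.0 - eps) * one_hot + eps * soft
    return -(labels * F.log_softmax(logits, dim=-1)).sum(-1).mean()
```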
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
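For reference, the released BLOOM checkpoints can be loaded through the Hugging Face transformers library; the minimal example below uses the small bloom-560m checkpoint as a stand-in for the full 176B "bigscience/bloom" model.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal usage sketch; the 560M checkpoint stands in for the 176B model.
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("Federated learning is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```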
Customer reviews usually contain substantial information about a user's online shopping experience. While positive reviews benefit a store, negative reviews strongly influence consumers' decisions and may cause a decline in sales. It is therefore of vital importance to respond to each negative review carefully and persuasively so as to minimize its adverse effect. Recent studies consider leveraging generation models to help sellers respond. However, this problem has not been explored in depth, since a review may cover multiple aspects of issues, each of which should be addressed accordingly and persuasively. In this work, we propose a multi-source multi-aspect attentive generation model for persuasive response generation. The proposed model appropriately obtains and utilizes various sources of information to generate more informative and persuasive responses. A multi-aspect attentive network is proposed to automatically attend to different aspects in a review and to ensure that most of the issues are addressed. Extensive experiments on two real-world datasets show that our approach outperforms state-of-the-art methods, and online tests demonstrate that our deployed system significantly improves the efficiency with which stores handle negative reviews.
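A minimal sketch of what such a multi-aspect attentive layer could look like is given below, with learned aspect queries attending over the review tokens; the aspect count, dimensions, and names are assumptions.

```python
import torch
import torch.nn as nn

class MultiAspectAttention(nn.Module):
    # Each learned aspect query attends over the review tokens independently,
    # so distinct complaints in one review can each drive part of the
    # generated response. Sizes and the aspect count are assumptions.
    def __init__(self, hidden=256, n_aspects=4):
        super().__init__()
        self.aspect_queries = nn.Parameter(torch.randn(n_aspects, hidden))
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)

    def forward(self, review_states):              # (batch, seq, hidden)
        b = review_states.size(0)
        q = self.aspect_queries.unsqueeze(0).expand(b, -1, -1)
        aspect_vecs, _ = self.attn(q, review_states, review_states)
        return aspect_vecs                          # (batch, n_aspects, hidden)
```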
Despite a surge of interest in this field, reinforcement learning in sparse-reward real-world environments remains challenging. Previous attempts suggest that intrinsic rewards can alleviate the problems caused by sparsity. In this paper, we propose a novel intrinsic reward inspired by human learning, since humans gauge curiosity by comparing current observations against historical knowledge. Specifically, we train a self-supervised prediction model and save a set of snapshots of its model parameters, without incurring additional training cost. We then employ the nuclear norm to evaluate the temporal inconsistency between the predictions of the different snapshots, which can further be deployed as the intrinsic reward. In addition, a variational weighting mechanism is proposed to assign weights to the different snapshots in an adaptive manner. We demonstrate the efficacy of the proposed method on various benchmark environments. The results show that, compared with other intrinsic-reward-based methods, our method delivers state-of-the-art performance without incurring additional training costs and with higher noise tolerance. Our code will be released publicly to enhance reproducibility.
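A sketch of the nuclear-norm bonus follows, assuming uniform snapshot weights in place of the variational weighting mechanism.

```python
import torch

def intrinsic_reward(snapshots, state):
    # Stack the predictions that several saved snapshots of the
    # self-supervised predictor make for one state; the nuclear norm of
    # that matrix measures their temporal inconsistency and serves as the
    # curiosity bonus. Uniform snapshot weights stand in for the paper's
    # variational weighting mechanism.
    preds = torch.stack([m(state).flatten() for m in snapshots])  # (k, feat)
    return torch.linalg.matrix_norm(preds, ord="nuc")

# Snapshots are periodic deep copies (copy.deepcopy) of the online predictor,
# saved during training at no extra training cost.
```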
The sparsity of extrinsic rewards poses a serious challenge for reinforcement learning (RL). Many current efforts focus on curiosity, which can provide a representative intrinsic reward for effective exploration. However, the challenge is far from being solved. In this paper, we propose a curiosity method for RL named DyMeCu, which stands for Dynamic Memory-based Curiosity. Inspired by human curiosity and information theory, DyMeCu consists of a dynamic memory and dual online learners. Curiosity arises when the memorized information cannot cope with the current state, and the information gap between the dual learners can be formulated as the intrinsic reward for the agent; the state information can then be consolidated into the dynamic memory. Compared with previous curiosity methods, DyMeCu better imitates human curiosity with its dynamic memory, and the memory module can grow dynamically based on a bootstrap paradigm with the dual learners. Large-scale empirical experiments on multiple benchmarks, including the DeepMind Control Suite and the Atari Suite, show that DyMeCu outperforms competitive curiosity-based methods with or without extrinsic rewards. We will release our code to enhance reproducibility.
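A heavily hedged sketch of the dual-learner/memory interaction is given below; the disagreement term and the EMA-style consolidation rule are assumptions rather than the paper's exact design.

```python
import torch
import torch.nn.functional as F

def dymecu_step(state_feat, learner_a, learner_b, memory):
    # Two online learners both regress onto the dynamic memory's
    # representation of the state; their disagreement is the curiosity
    # reward, and the memory is afterwards consolidated toward the
    # learners (e.g. an EMA over their weights). Assumed design.
    pred_a, pred_b = learner_a(state_feat), learner_b(state_feat)
    with torch.no_grad():
        target = memory(state_feat)
    reward = F.mse_loss(pred_a, pred_b)                   # information gap
    loss = F.mse_loss(pred_a, target) + F.mse_loss(pred_b, target)
    return reward.detach(), loss                          # loss trains the learners
```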
Spiking neural networks are a form of neuromorphic computing that is believed to improve the level of intelligence and to provide a premise for quantum computing. In this work, we address this issue by designing an optical spiking neural network and demonstrate that it can be used to accelerate computation, especially on combinatorial optimization problems. Here, the spiking neural network is constructed from anti-symmetrically coupled degenerate optical parametric oscillator pulses and dissipative pulses. A nonlinear transfer function is chosen to mitigate amplitude inhomogeneity and to destabilize the resulting local minima according to the dynamical behavior of the spiking neurons. It is numerically demonstrated that the spiking neural network Ising machine has excellent performance on combinatorial optimization problems, which is expected to offer new applications for neural computing and optical computing.
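As a toy numerical analogue (not the authors' optical setup), Ising-machine-style mean-field dynamics can be simulated as below; the coupling strength, pump level, and step size are arbitrary assumptions.

```python
import numpy as np

# Toy mean-field DOPO-style Ising machine: each pulse amplitude x_i evolves
# under saturable gain plus mutual coupling J, and the signs of x at
# convergence give a candidate Ising spin configuration.
rng = np.random.default_rng(0)
n = 8
J = rng.choice([-1.0, 1.0], size=(n, n))
J = (J + J.T) / 2.0                   # symmetric Ising couplings
np.fill_diagonal(J, 0.0)

x = 0.01 * rng.standard_normal(n)     # oscillator (pulse) amplitudes
pump, dt = 1.1, 0.05
for _ in range(2000):
    x += dt * ((pump - 1.0 - x**2) * x + 0.1 * (J @ x))

spins = np.sign(x)
print("spins:", spins, "energy:", -0.5 * spins @ J @ spins)
```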
Imaging markers of cerebral small vessel disease provide valuable information on brain health, but their manual assessment is time-consuming and hampered by substantial intra- and inter-rater variability. Automated rating may benefit biomedical research as well as clinical assessment, but the diagnostic reliability of existing algorithms is unknown. Here, we present the Vascular Lesions Detection and Segmentation (Where is VALDO?) challenge, which was run as a satellite event of the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2021. This challenge aimed to promote the development of methods for the automated detection and segmentation of small and sparse imaging markers of cerebral small vessel disease, namely enlarged perivascular spaces (EPVS) (Task 1), cerebral microbleeds (Task 2), and lacunes of presumed vascular origin (Task 3), while leveraging weak and noisy labels. Overall, 12 teams participated in the challenge and proposed solutions for one or more tasks (4 for Task 1 - EPVS, 9 for Task 2 - Microbleeds, and 6 for Task 3 - Lacunes). Multi-center data were used for both training and evaluation. The results showed large variability in performance both across teams and across tasks, with particularly promising results for Task 1 - EPVS and Task 2 - Microbleeds, and no practically usable results yet for Task 3 - Lacunes. The challenge also highlighted performance inconsistencies across cases that may hinder use at the individual level, while still proving useful at the population level.